Reference Knowledgeable Network for Machine Reading Comprehension

Authors

Abstract

Multi-choice Machine Reading Comprehension (MRC) is a challenging task that requires models to select the most appropriate answer from a set of candidates, given a passage and a question. Most existing research focuses on modeling specific tasks or complex networks, without explicitly referring to relevant and credible external knowledge sources, which are supposed to greatly make up for the deficiency of the given passage. Thus we propose a novel reference-based knowledge enhancement model called Reference Knowledgeable Network (RekNet), which simulates human reading strategies to refine critical information from the passage and quote explicit knowledge when necessary. In detail, RekNet refines fine-grained critical information and defines it as Reference Span, then quotes explicit knowledge quadruples by the co-occurrence information of the Reference Span and the candidates. The proposed RekNet is evaluated on three multi-choice MRC benchmarks: RACE, DREAM and Cosmos QA, obtaining consistent and remarkable performance improvement with an observable statistical significance level over strong baselines. Our code is available at https://github.com/Yilin1111/RekNet .
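The co-occurrence-based quoting step in the abstract can be pictured with a minimal sketch. The quadruple format (subject, relation, object, confidence), the token-overlap scoring, and all function names below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch: given a refined Reference Span and the answer candidates,
# quote knowledge quadruples whose terms co-occur with both.
from typing import List, Tuple

Quadruple = Tuple[str, str, str, float]  # (subject, relation, object, confidence) -- assumed format


def tokenize(text: str) -> set:
    """Lowercase whitespace tokenization; a real system would use a proper tokenizer."""
    return set(text.lower().split())


def quote_quadruples(reference_span: str,
                     candidates: List[str],
                     knowledge_base: List[Quadruple],
                     top_k: int = 3) -> List[Quadruple]:
    """Rank quadruples by token co-occurrence with the Reference Span and the
    candidates, and return the top-k as quoted external knowledge."""
    span_tokens = tokenize(reference_span)
    cand_tokens = set().union(*(tokenize(c) for c in candidates))

    def score(quad: Quadruple) -> float:
        subj, rel, obj, conf = quad
        quad_tokens = tokenize(f"{subj} {rel} {obj}")
        # Reward overlap with both the span and the candidate answers.
        return conf * (len(quad_tokens & span_tokens) + len(quad_tokens & cand_tokens))

    ranked = sorted(knowledge_base, key=score, reverse=True)
    return [q for q in ranked if score(q) > 0][:top_k]
```

Only quadruples that share vocabulary with both the refined span and the candidates survive the filter, which mirrors the idea of quoting knowledge in necessity rather than attaching the whole knowledge source.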


Similar Articles

Stochastic Answer Networks for Machine Reading Comprehension

We propose a simple yet robust stochastic answer network (SAN) that simulates multistep reasoning in machine reading comprehension. Compared to previous work such as ReasoNet, the unique feature is the use of a kind of stochastic prediction dropout on the answer module (final layer) of the neural network during the training. We show that this simple trick improves robustness and achieves result...
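The stochastic prediction dropout mentioned above can be sketched as follows; the tensor shapes, drop rate, and helper name are assumptions for illustration, not the SAN authors' code.

```python
# Sketch: the answer module emits a prediction at each reasoning step; during
# training a random subset of the per-step predictions is dropped before averaging.
import torch


def average_with_prediction_dropout(step_logits: torch.Tensor,
                                    drop_rate: float = 0.4,
                                    training: bool = True) -> torch.Tensor:
    """step_logits: (num_steps, batch, num_classes) logits, one per reasoning step.
    Returns class probabilities averaged over the kept steps."""
    probs = torch.softmax(step_logits, dim=-1)
    if training:
        num_steps = probs.size(0)
        keep = torch.rand(num_steps) > drop_rate   # randomly drop some step predictions
        if not keep.any():                         # always keep at least one step
            keep[torch.randint(num_steps, (1,))] = True
        probs = probs[keep]
    return probs.mean(dim=0)


# Usage: 5 reasoning steps, batch of 2 questions, 4 answer options each.
logits = torch.randn(5, 2, 4)
train_probs = average_with_prediction_dropout(logits, training=True)
eval_probs = average_with_prediction_dropout(logits, training=False)
```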


Evaluating Machine Reading Systems through Comprehension Tests

This paper describes a methodology for testing and evaluating the performance of Machine Reading systems through Question Answering and Reading Comprehension Tests. The methodology is being used in QA4MRE (QA for Machine Reading Evaluation), one of the labs of CLEF. We report here the conclusions and lessons learned after the first campaign in 2011.


Evaluation Metrics for Machine Reading Comprehension: Prerequisite Skills and Readability

Knowing the quality of reading comprehension (RC) datasets is important for the development of natural-language understanding systems. In this study, two classes of metrics were adopted for evaluating RC datasets: prerequisite skills and readability. We applied these classes to six existing datasets, including MCTest and SQuAD, and highlighted the characteristics of the datasets according to ea...


Evaluation Experiment for Reading Comprehension of Machine Translation Outputs

This paper proposes evaluation methods for reading comprehension of English to Japanese translation outputs. The methods were designed not only to evaluate the performance of current systems, but to evaluate the performance of future systems had the current problems been solved. The experiments have shown that the proposed methods are capable of producing results that are statistically signific...


Dataset for the First Evaluation on Chinese Machine Reading Comprehension

Machine Reading Comprehension (MRC) has become enormously popular recently and has attracted a lot of attention. However, existing reading comprehension datasets are mostly in English. To add diversity in reading comprehension datasets, in this paper we propose a new Chinese reading comprehension dataset for accelerating related research in the community. The proposed dataset contains two diff...



Journal

Journal title: IEEE/ACM Transactions on Audio, Speech, and Language Processing

Year: 2022

ISSN: 2329-9304, 2329-9290

DOI: https://doi.org/10.1109/taslp.2022.3164219